Deepfake facial recognition
02 August, 2020: By Ajoy Maitra
With machine learning technologies accelerating the Fourth Industrial Revolution and the digitisation of services, the world has transformed dramatically within a few months. The convenience of digital services became apparent during the lockdowns caused by the COVID-19 pandemic, pushing corporations to adopt new technologies to offset the economic hit.

Intriguing technologies often seen in films such as James Bond, Mission: Impossible, Kingsman and Iron Man hint at implementations we enjoy watching on screen. Such advancements, however, are arriving even faster than we had predicted. Have you come across the word "deepfake"? Or perhaps it is an entirely new concept to you.
The term "deepfake" was first coined in late 2017 by a Reddit user of the same name, who created a space on the online news and aggregation site where they shared pornographic videos made with open-source face-swapping technology.

History
The world regards Woodrow Wilson Bledsoe as the father of facial recognition. In the 1960s, Bledsoe created a system that could organize photographs of faces by hand using the RAND tablet, a device for entering vertical and horizontal coordinates on a grid with a stylus that emitted electromagnetic pulses. Operators used the system to manually record the coordinates of facial features such as the eyes, nose, mouth, and hairline. In the 1970s, Goldstein, Harmon, and Lesk made the facial recognition system more accurate, using 21 facial markers, including lip thickness and hair color, to identify faces automatically. Bledsoe's system, however, still relied on biometrics computed by hand.
Then, between 1993 and the early 2000s, DARPA and NIST ran the FERET program to encourage a commercial facial recognition market. In 2002, law enforcement officials applied facial recognition in critical technology testing.

Deepfakes to Save Millions of Dollars
During the lockdown, it became very difficult for corporations to shoot promotional videos featuring renowned actors. With deepfake technology, however, it is possible to modulate a voice and facial expressions to resemble those of the actor. Most of us have already spent a lot of time with an application without ever knowing how it works: Snapchat, where facial expressions are mapped onto animal faces and other comic overlays.

Voice Recognition
By combining the basics of face detection with machine learning training on large datasets, a whole new level of deepfakes has been unlocked. Models are trained on hundreds of thousands of faces so that they can convincingly swap one face for another. Voice modulation has also been demonstrated: Adobe presented 'Adobe VoCo' during the Adobe MAX 2016 Sneak Peeks, co-hosted by Jordan Peele, where a recording of an individual's voice was manipulated to say whatever was typed. Moreover, from only 40 minutes of an individual's speech, the machine can learn every sound of the language as spoken by that individual.
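The face-swapping described above is classically built from one shared encoder and two identity-specific decoders: encode a face of person A, then decode it with person B's decoder. The sketch below shows only the data flow of that idea in NumPy; the layer sizes, the plain linear layers, and the omitted training loop are all illustrative assumptions, not a real production model.

```python
import numpy as np

rng = np.random.default_rng(0)

FACE_DIM = 64 * 64      # flattened 64x64 grayscale face (assumed size)
LATENT_DIM = 128        # size of the shared latent space (assumed)

# Shared encoder: maps any face into a common latent representation.
W_enc = rng.standard_normal((FACE_DIM, LATENT_DIM)) * 0.01

# Separate decoders: render a latent code as identity A or identity B.
W_dec_a = rng.standard_normal((LATENT_DIM, FACE_DIM)) * 0.01
W_dec_b = rng.standard_normal((LATENT_DIM, FACE_DIM)) * 0.01

def encode(face):
    return np.tanh(face @ W_enc)

def decode(latent, W_dec):
    return latent @ W_dec

# Training (omitted) would fit encoder + decoder A on faces of person A and
# encoder + decoder B on faces of person B, sharing W_enc between the two.

# The swap: encode a face of person A, decode it with B's decoder, producing
# A's expression and pose rendered with B's appearance.
face_a = rng.standard_normal(FACE_DIM)
swapped = decode(encode(face_a), W_dec_b)
print(swapped.shape)  # (4096,)
```

The shared encoder is the key design choice: because both identities pass through the same latent space, expression and pose transfer across decoders.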
Thus, deepfake technology has the potential to save millions of dollars, from setting up an environment for a shoot to producing a complete advertisement video. It can also help create training videos at lower cost and in less time.

However, trained algorithms can also detect deepfakes that an ordinary person would struggle to distinguish. Such algorithms are often used as a security measure against this breach of privacy, alongside blockchain technologies that make it practically impossible to alter originally stored videos.
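The blockchain protection mentioned above rests on a simple mechanism: a cryptographic fingerprint of the original video is anchored in tamper-proof storage, so any later edit changes the hash and is detectable. A minimal sketch, assuming in-memory byte strings stand in for real video files:

```python
import hashlib

def fingerprint(video_bytes: bytes) -> str:
    """Return a SHA-256 hex digest of the raw video data."""
    return hashlib.sha256(video_bytes).hexdigest()

# Stand-in "videos" for illustration only.
original = b"original video frames..."
anchored_hash = fingerprint(original)   # this value would be stored on-chain

# Verification: recompute and compare against the anchored hash.
assert fingerprint(original) == anchored_hash   # untouched -> matches
tampered = b"deepfaked video frames..."
assert fingerprint(tampered) != anchored_hash   # altered -> detected
print("integrity check passed")
```

The blockchain itself only needs to store the short digest, not the video; the immutability of the stored hash is what makes later alteration of the "original" provable.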